In this paper we develop FaceQgen, a no-reference quality assessment approach for face images based on a Generative Adversarial Network that generates a scalar quality measure correlated with face recognition accuracy. FaceQgen does not require quality labels for training; it is trained from scratch using the SCface database. FaceQgen applies image restoration to a face image of unknown quality, transforming it into a canonical high-quality image, i.e., frontal pose, homogeneous background, etc. The quality estimate is the similarity between the original and the restored images, since low-quality images experience larger changes due to restoration. We compare three different numerical quality measures: a) the MSE between the original and restored images, b) their SSIM, and c) the output score of the GAN's Discriminator. The results show that FaceQgen's quality measures are good estimators of face recognition accuracy. Our experiments include a comparison with other quality assessment methods designed for faces and for general images, in order to position FaceQgen within the state of the art. This comparison shows that, even though FaceQgen does not surpass the best existing face quality assessment methods in predicting face recognition accuracy, it achieves results good enough to demonstrate the potential of semi-supervised learning approaches for quality estimation (in particular, data-driven learning based on a single high-quality image per subject), with the capacity to improve its performance in the future through adequate refinement of the model, and with the significant advantage over competing methods of not needing quality labels for its development. This makes FaceQgen flexible and scalable without expensive data curation.
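A minimal sketch of the scoring step described above, assuming a trained restoration model has already produced the canonical image; the `discriminator` callable and the image format are illustrative assumptions, not FaceQgen's released code:

```python
# Quality as similarity between the input face and its GAN-restored "canonical" version.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def quality_scores(original: np.ndarray, restored: np.ndarray, discriminator=None):
    """original, restored: grayscale float images in [0, 1] with the same shape.
    discriminator: optional callable returning a realism score (placeholder)."""
    mse = float(np.mean((original - restored) ** 2))        # (a) small change -> high quality
    sim = float(ssim(original, restored, data_range=1.0))   # (b) high similarity -> high quality
    disc = float(discriminator(original)) if discriminator else None  # (c) GAN discriminator score
    return {"mse": mse, "ssim": sim, "discriminator": disc}
```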
Cancelable biometrics refers to a family of techniques in which the biometric input is intentionally transformed with a key before processing or storage. The transformation is repeatable, enabling subsequent biometric comparisons. This paper introduces a new scheme for cancelable biometrics aimed at protecting templates against potential attacks, applicable to any biometrics-based recognition system. The proposed scheme is based on time-varying keys obtained from morphing random biometric information. An experimental implementation of the scheme is given for face biometrics. The results confirm that the proposed approach is able to withstand leakage attacks while improving recognition performance.
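For illustration, a generic cancelable-template sketch (not the paper's morphing-based, time-varying-key scheme): an embedding is transformed with a repeatable, key-dependent orthogonal projection before storage, so matching happens in the protected domain and the key can be revoked. The function names and the projection itself are assumptions:

```python
import numpy as np

def transform_template(embedding: np.ndarray, key: int) -> np.ndarray:
    """Protect a biometric embedding with a key-derived random orthogonal projection."""
    rng = np.random.default_rng(key)                  # the key makes the transform repeatable
    d = embedding.shape[0]
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal matrix from the key
    return q @ embedding                              # protected template

def match(probe: np.ndarray, protected_ref: np.ndarray, key: int) -> float:
    """Cosine similarity computed entirely in the protected domain."""
    p = transform_template(probe, key)
    return float(p @ protected_ref / (np.linalg.norm(p) * np.linalg.norm(protected_ref)))
```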
This paper is the first to explore automated methods for detecting bias in deep convolutional neural networks simply by looking at their weights. It is also a step towards understanding neural networks and how they work. We show that it is indeed possible to tell whether a model is biased or not just by inspecting its weights, without running inference on any specific input. We analyze how bias is encoded in the weights of deep networks with a toy example using the Colored MNIST database, and we also provide a realistic case study of gender detection from face images using state-of-the-art methods and experimental resources. To this end, we generated two databases with 36K and 48K biased models. For the MNIST models, we are able to detect whether they present strong or low bias with more than 99% accuracy, and we are also able to classify them into four levels of bias with more than 70% accuracy. For the face models, we achieved 90% accuracy in distinguishing between models biased towards Asian, Black, or Caucasian individuals.
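A hypothetical sketch of the core idea, detecting bias from weights alone: each trained model's weight tensors are summarized into a fixed-length feature vector and a standard classifier predicts the bias level. The feature set and classifier are illustrative, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def weight_features(state_dict) -> np.ndarray:
    """Summarize a model's weights (dict of arrays) into simple per-tensor statistics."""
    feats = []
    for w in state_dict.values():
        w = np.asarray(w).ravel()
        feats += [w.mean(), w.std(), np.abs(w).mean(), np.percentile(w, 99)]
    return np.array(feats)

def train_bias_detector(models, labels):
    """models: list of weight dicts; labels: bias level per model (e.g. 0..3)."""
    X = np.stack([weight_features(m) for m in models])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```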
Spacecraft pose estimation is a key task to enable space missions in which two spacecraft must navigate around each other. Current state-of-the-art algorithms for pose estimation employ data-driven techniques. However, there is an absence of real training data for spacecraft imaged in space conditions due to the costs and difficulties associated with the space environment. This has motivated the introduction of 3D data simulators, solving the issue of data availability but introducing a large gap between the training (source) and test (target) domains. We explore a method that incorporates 3D structure into the spacecraft pose estimation pipeline to provide robustness to intensity domain shift, and we present an algorithm for unsupervised domain adaptation with robust pseudo-labelling. Our solution ranked second in the two categories of the 2021 Pose Estimation Challenge organised by the European Space Agency and Stanford University, achieving the lowest average error over the two categories.
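An illustrative sketch of pseudo-labelling for unsupervised domain adaptation in this setting: a source-trained pose estimator labels real target images, only predictions passing a robustness check are kept, and the model is fine-tuned on them. The predictor, fine-tuning routine, and the error criterion are assumptions, not the paper's actual algorithm:

```python
def robust_pseudo_label(predict_fn, error_fn, target_images, threshold):
    """Keep only target-domain predictions whose self-consistency error is below a threshold."""
    pseudo = []
    for img in target_images:
        pose = predict_fn(img)               # pose predicted by the source-trained model
        if error_fn(img, pose) < threshold:  # e.g. a keypoint reprojection error (assumed criterion)
            pseudo.append((img, pose))
    return pseudo

def self_train(predict_fn, finetune_fn, error_fn, target_images, threshold, rounds=3):
    """Alternate pseudo-labelling and fine-tuning for a few rounds."""
    for _ in range(rounds):
        pseudo = robust_pseudo_label(predict_fn, error_fn, target_images, threshold)
        predict_fn = finetune_fn(pseudo)     # returns an updated predictor
    return predict_fn
```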
Petrov-Galerkin formulations with optimal test functions allow for the stabilization of finite element simulations. In particular, given a discrete trial space, the optimal test space induces a numerical scheme delivering the best approximation in terms of a problem-dependent energy norm. This ideal approach has two shortcomings: first, we need to explicitly know the set of optimal test functions; and second, the optimal test functions may have large supports inducing expensive dense linear systems. Nevertheless, parametric families of PDEs are an example where it is worth investing some (offline) computational effort to obtain stabilized linear systems that can be solved efficiently, for a given set of parameters, in an online stage. Therefore, as a remedy for the first shortcoming, we explicitly compute (offline) a function mapping any PDE parameter to the matrix of coefficients of optimal test functions (in a basis expansion) associated with that parameter. Next, as a remedy for the second shortcoming, we use low-rank approximation to hierarchically compress the (non-square) matrix of coefficients of optimal test functions. In order to accelerate this process, we train a neural network to learn a critical bottleneck of the compression algorithm (for a given set of PDE parameters). When solving online the resulting (compressed) Petrov-Galerkin formulation, we employ a GMRES iterative solver with inexpensive matrix-vector multiplications thanks to the low-rank features of the compressed matrix. We perform experiments showing that the full online procedure is as fast as the original (unstable) Galerkin approach. In other words, we get the stabilization with hierarchical matrices and neural networks practically for free. We illustrate our findings by means of 2D Eriksson-Johnson and Helmholtz model problems.
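A sketch of the online solve stage under assumed shapes: the coefficients of the optimal test functions are stored in low-rank form W ≈ U Vᵀ, B is the bilinear-form matrix (enriched test space × trial space), and GMRES only ever needs cheap matrix-vector products. This is not the paper's implementation, just an illustration of the low-rank trick:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_compressed(U, V, B, f):
    """Solve the Petrov-Galerkin system (W^T B) x = W^T f with W ≈ U @ V.T kept in factored form.
    U: (m, r), V: (n, r), B: (m, n), f: (m,)."""
    n = B.shape[1]
    def matvec(x):
        return V @ (U.T @ (B @ x))     # W^T B x without ever forming W explicitly
    A = LinearOperator((n, n), matvec=matvec)
    rhs = V @ (U.T @ f)                # W^T f, also via the low-rank factors
    x, info = gmres(A, rhs)
    return x
```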
To alleviate the limited coverage of structured databases, recent task-oriented dialogue systems incorporate external unstructured knowledge to guide the generation of system responses. However, these systems usually rely on word- or sentence-level similarities to detect the relevant knowledge context, which only partially capture topic-level relevance. In this paper, we examine how to better integrate topical information in knowledge-grounded task-oriented dialogue and propose "Topic-Aware Response Generation" (TARG), an end-to-end response generation model. TARG incorporates multiple topic-aware attention mechanisms to derive an importance weighting scheme over dialogue utterances and external knowledge sources towards a better understanding of the dialogue history. Experimental results indicate that TARG achieves state-of-the-art performance in knowledge selection and response generation, outperforming the previous state of the art by 3.2, 3.6, and 4.2 points in EM, F1, and BLEU-4 respectively on Doc2Dial, and performing comparably with previous work on DSTC9; both are knowledge-grounded task-oriented dialogue datasets.
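A minimal sketch of a topic-aware weighting step: each utterance or knowledge snippet is weighted by how close its (precomputed) topic vector is to the current turn's topic vector. This illustrates the idea only; it is not TARG's actual attention architecture:

```python
import numpy as np

def topic_aware_weights(query_topic: np.ndarray, context_topics: np.ndarray) -> np.ndarray:
    """query_topic: (d,) topic vector of the current turn; context_topics: (n, d) topic vectors
    of dialogue utterances / knowledge snippets. Returns softmax-normalized weights (n,)."""
    scores = context_topics @ query_topic   # topical relevance of each context item
    scores = scores - scores.max()          # numerical stability before the softmax
    w = np.exp(scores)
    return w / w.sum()
```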
Video provides us with the spatio-temporal consistency needed for visual learning. Recent approaches have utilized this signal to learn correspondence estimation from close-by frame pairs. However, by relying only on close-by frame pairs, those approaches miss out on the richer long-range consistency between distant overlapping frames. To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences. Our approach combines pairwise correspondence estimation and registration with a novel SE(3) transformation synchronization algorithm. Our key insight is that self-supervised multiview registration allows us to obtain correspondences over longer time frames, increasing both the diversity and difficulty of sampled pairs. We evaluate our approach on indoor scenes for correspondence estimation and RGB-D point cloud registration and find that we perform on par with supervised approaches.
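An illustrative sketch of the multiview-consistency signal: pairwise relative poses between nearby frames can be chained to predict the pose between distant frames, and the mismatch with a directly estimated long-range pose provides self-supervision. Poses are 4x4 SE(3) matrices; the pose estimator is assumed given, and this is not the paper's synchronization algorithm:

```python
import numpy as np

def compose_chain(pairwise_T):
    """pairwise_T: list of 4x4 transforms T_{i -> i+1}. Returns the chained T_{0 -> n}."""
    T = np.eye(4)
    for T_step in pairwise_T:
        T = T_step @ T                 # left-multiply: T_{0->k+1} = T_{k->k+1} @ T_{0->k}
    return T

def consistency_error(pairwise_T, T_direct):
    """Mismatch between the chained pose and a directly estimated long-range pose."""
    return float(np.linalg.norm(compose_chain(pairwise_T) - T_direct))
```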
In this work we propose a novel token-based training strategy that improves Transformer-Transducer (T-T) based speaker change detection (SCD) performance. The conventional T-T based SCD model loss optimizes all output tokens equally. Due to the sparsity of the speaker changes in the training data, the conventional T-T based SCD model loss leads to sub-optimal detection accuracy. To mitigate this issue, we use a customized edit-distance algorithm to estimate the token-level SCD false accept (FA) and false reject (FR) rates during training and optimize model parameters to minimize a weighted combination of the FA and FR, focusing the model on accurately predicting speaker changes. We also propose a set of evaluation metrics that align better with commercial use cases. Experiments on a group of challenging real-world datasets show that the proposed training method can significantly improve the overall performance of the SCD model with the same number of parameters.
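A simplified sketch of the FA/FR-weighted objective, assuming the reference and hypothesis token sequences have already been aligned (the paper uses a customized edit-distance alignment, omitted here); the `<sc>` token name and the weights are illustrative:

```python
SC = "<sc>"  # hypothetical speaker-change token

def fa_fr_rates(ref_tokens, hyp_tokens):
    """Token-level false-accept and false-reject rates for the speaker-change token."""
    fa = sum(1 for r, h in zip(ref_tokens, hyp_tokens) if h == SC and r != SC)
    fr = sum(1 for r, h in zip(ref_tokens, hyp_tokens) if r == SC and h != SC)
    n_changes = max(1, sum(1 for r in ref_tokens if r == SC))
    n_non_changes = max(1, len(ref_tokens) - n_changes)
    return fa / n_non_changes, fr / n_changes

def scd_objective(ref_tokens, hyp_tokens, w_fa=0.5, w_fr=0.5):
    """Weighted combination of FA and FR, as in the training objective described above."""
    fa, fr = fa_fr_rates(ref_tokens, hyp_tokens)
    return w_fa * fa + w_fr * fr
```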
Autonomous underwater vehicles (AUVs) are becoming standard tools for underwater exploration and seabed mapping in both scientific and industrial applications \cite{graham2022rapid, stenius2022system}. Their capacity to dive untethered allows them to reach areas inaccessible to surface vessels and to collect data closer to the seafloor, regardless of the water depth. However, their navigation autonomy remains bounded by the accuracy of their dead reckoning (DR) estimate of their global position, which is severely limited in the absence of a priori maps of the area and of GPS signal. Global localization systems equivalent to the latter exist for the underwater domain, such as LBL or USBL. However, they involve expensive external infrastructure and their reliability decreases with the distance to the AUV, making them unsuitable for deep-sea surveys.
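A minimal dead-reckoning sketch illustrating why the DR estimate drifts: the global position is obtained by integrating body-frame speed and heading over time, so any sensor bias or noise accumulates without bound. Values and the sensor model are purely illustrative:

```python
import numpy as np

def dead_reckon(x0, speeds, headings, dt):
    """x0: (2,) start position; speeds: (N,) measured speeds; headings: (N,) yaw angles in rad."""
    x = np.array(x0, dtype=float)
    track = [x.copy()]
    for v, psi in zip(speeds, headings):
        x = x + dt * v * np.array([np.cos(psi), np.sin(psi)])  # integrate velocity in the world frame
        track.append(x.copy())
    return np.array(track)  # position error grows with every noisy/biased measurement integrated
```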
In this work, we estimate the depth at which domestic waste is located in space from a mobile robot in outdoor scenarios. Since this estimation is performed over a broad range of distances (0.3 - 6.0 m), we use RGB-D camera and LiDAR fusion. With this aim and range, we compare several methods, such as average, nearest, median and center point, applied to the points that lie inside a reduced or non-reduced Bounding Box (BB). These BBs are obtained from segmentation and detection methods representative of these techniques, such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a BB reduction of 40% returns the same output as segmenting the object and applying the average method. Moreover, the detection method is faster and lighter than the segmentation one. The median error in the conducted experiments was 0.0298 ${\pm}$ 0.0544 m.
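A sketch of the reported best configuration, under the assumption that a "40% reduction" means shrinking each side of the bounding box by 40% around its centre and that the depth map is already registered to the RGB frame by the RGB-D/LiDAR fusion step:

```python
import numpy as np

def box_depth(depth_map: np.ndarray, box, reduction=0.40):
    """box: (x_min, y_min, x_max, y_max) in pixels; returns the mean depth inside the reduced box."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * (1 - reduction), (y1 - y0) * (1 - reduction)   # shrink the box around its centre
    patch = depth_map[int(max(cy - h / 2, 0)): int(cy + h / 2),
                      int(max(cx - w / 2, 0)): int(cx + w / 2)]
    valid = patch[patch > 0]                                          # ignore missing depth readings
    return float(valid.mean()) if valid.size else None
```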